63 research outputs found
Rational Krylov for Stieltjes matrix functions: convergence and pole selection
Evaluating the action of a matrix function on a vector, that is, computing $f(A)v$, is a ubiquitous task in applications. When $A$ is large, one usually relies on Krylov projection methods. In this paper, we provide effective choices for the poles of the rational Krylov method for approximating $f(A)v$ when $f$ is either a Cauchy-Stieltjes or a Laplace-Stieltjes function (or, equivalently, completely monotonic) and $A$ is a positive definite matrix. Relying on the same tools used to analyze the generic situation, we then focus on the case where $A$ has a Kronecker sum structure and $v$ is obtained by vectorizing a low-rank matrix; this finds application, for instance, in solving fractional diffusion equations on two-dimensional tensor grids. We show how to leverage tensorized Krylov subspaces to exploit the Kronecker structure, and we introduce an error analysis for the numerical approximation of $f(A)v$. Pole selection strategies with explicit convergence bounds are given also in this case.
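The flavor of the method can be sketched in a few lines of NumPy: a rational Krylov space with a single repeated negative real pole (shift-and-invert Arnoldi) used to approximate $f(A)v$ for the Cauchy-Stieltjes function $f(z) = z^{-1/2}$. The pole $\sigma = -1$, the matrix sizes, and the iteration count below are illustrative choices, not the optimized pole selection of the paper.

```python
import numpy as np

def shift_invert_krylov_fAb(A, b, f, sigma, m):
    """Approximate f(A) b from the rational Krylov space spanned by
    b, (A - sigma I)^{-1} b, (A - sigma I)^{-2} b, ...  built by Arnoldi
    applied to (A - sigma I)^{-1} (a single repeated pole)."""
    n = b.shape[0]
    M = np.linalg.inv(A - sigma * np.eye(n))   # dense inverse; fine for a sketch
    V = np.zeros((n, m + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = M @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        for i in range(j + 1):                 # reorthogonalize for stability
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    Vm = V[:, :m]
    Am = Vm.T @ A @ Vm                         # projected (symmetric) matrix
    lam, Q = np.linalg.eigh(Am)
    fAm = Q @ np.diag(f(lam)) @ Q.T
    return Vm @ (fAm @ (Vm.T @ b))

# SPD test matrix with spectrum in [0.1, 10]; f(z) = z^{-1/2} is Cauchy-Stieltjes
rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.logspace(-1, 1, n)
A = Q @ np.diag(lam) @ Q.T
b = rng.standard_normal(n)

f = lambda z: 1.0 / np.sqrt(z)
exact = Q @ np.diag(f(lam)) @ Q.T @ b
approx = shift_invert_krylov_fAb(A, b, f, sigma=-1.0, m=30)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

A single well-placed pole already gives fast geometric convergence here; the paper's point is how to place (possibly several) poles with explicit bounds.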
Fast Hessenberg reduction of some rank structured matrices
We develop two fast algorithms for Hessenberg reduction of a structured matrix $A = D + UV^*$, where $D$ is a real or unitary $n \times n$ diagonal matrix and $U, V \in \mathbb{C}^{n \times k}$. The proposed algorithm for the real case exploits a two-stage approach by first reducing the matrix to a generalized Hessenberg form and then completing the reduction by annihilation of the unwanted subdiagonals. It is shown that the novel method requires $O(n^2 k)$ arithmetic operations and is significantly faster than other reduction algorithms for rank-structured matrices. The method is then extended to the unitary-plus-low-rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for block tridiagonalization induces a structured reduction of $A$ into a block staircase CMV-type shape. Then, we present a numerically stable method for performing this reduction using unitary transformations, and we show how to generalize the subdiagonal elimination to this shape while still being able to provide a condensed representation for the reduced matrix. In this way the complexity still remains linear in $k$ and, moreover, the resulting algorithm can be adapted to deal efficiently with block companion matrices.
Quasiseparable Hessenberg reduction of real diagonal plus low rank matrices and applications
We present a novel algorithm to perform the Hessenberg reduction of an $n \times n$ matrix of the form $A = D + UV^*$, where $D$ is diagonal with real entries and $U$ and $V$ are $n \times k$ matrices with $k \ll n$. The algorithm has a cost of $O(n^2 k)$ arithmetic operations and is based on quasiseparable matrix technology. Applications are shown to the solution of polynomial eigenvalue problems, and some numerical experiments are reported in order to analyze the stability of the approach.
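The structure underlying these algorithms can be checked numerically: since $A - A^T = UV^T - VU^T$ has rank at most $2k$ when $D$ is real, every off-diagonal block in the upper triangle of the Hessenberg form has rank at most $2k + 1$, i.e., the reduced matrix is quasiseparable. The sketch below uses SciPy's dense $O(n^3)$ reduction merely to exhibit this structure; the sizes are arbitrary.

```python
import numpy as np
from scipy.linalg import hessenberg

# Real diagonal plus a rank-k correction
rng = np.random.default_rng(1)
n, k = 60, 2
D = np.diag(rng.standard_normal(n))
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A = D + U @ V.T

H = hessenberg(A)   # dense reduction, for illustration only

# A - A^T = U V^T - V U^T has rank <= 2k, and the part of H below the
# subdiagonal is zero, so upper off-diagonal blocks have rank <= 2k + 1
blk = H[: n // 2, n // 2:]
s = np.linalg.svd(blk, compute_uv=False)
num_rank = int(np.sum(s > 1e-10 * s[0]))
```

The fast algorithms in these two papers represent exactly this quasiseparable structure implicitly, which is how the cost drops from $O(n^3)$ to $O(n^2 k)$.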
Rational Krylov and ADI iteration for infinite size quasi-Toeplitz matrix equations
We consider a class of linear matrix equations involving semi-infinite matrices which have a quasi-Toeplitz structure. These equations arise in different settings, mostly connected with PDEs or the study of Markov chains such as random walks on bidimensional lattices. We present the theory justifying the existence of solutions in an appropriate Banach algebra which is computationally treatable, and we propose several methods for their solution. We show how to adapt the ADI iteration to this particular infinite-dimensional setting, and how to construct rational Krylov methods. Convergence theory is discussed, and numerical experiments validate the proposed approaches.
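In the finite-dimensional dense case, the ADI iteration that the paper adapts to the infinite quasi-Toeplitz setting reads as below. The single repeated shift pair is a rough choice for well-separated real spectra, not an optimized selection.

```python
import numpy as np

def adi_sylvester(A, B, C, shifts):
    """Plain dense ADI for A X - X B = C.  One double-step with pair (alpha, beta):
        (A - beta I) X_{j+1/2}  = X_j (B - beta I) + C
        X_{j+1} (B - alpha I)   = (A - alpha I) X_{j+1/2} - C
    alpha should sit near spec(A), beta near spec(B)."""
    n, m = A.shape[0], B.shape[0]
    X = np.zeros((n, m))
    In, Im = np.eye(n), np.eye(m)
    for alpha, beta in shifts:
        X = np.linalg.solve(A - beta * In, X @ (B - beta * Im) + C)
        X = np.linalg.solve((B - alpha * Im).T, ((A - alpha * In) @ X - C).T).T
    return X

rng = np.random.default_rng(2)
n = 50
# spec(A) near [1, 10], spec(B) near [-10, -1]: well-separated spectra
A = np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
B = -np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

p = np.sqrt(10.0)                       # geometric mean of the spectral interval
X = adi_sylvester(A, B, C, shifts=[(p, -p)] * 25)
res = np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C)
```

Each double-step contracts the error by a factor governed by how well the shifts approximate the two spectra, which is why shift (pole) selection is the crux in both the ADI and rational Krylov variants.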
Solving rank structured Sylvester and Lyapunov equations
We consider the problem of efficiently solving Sylvester and Lyapunov
equations of medium and large scale, in the case of rank-structured data, i.e.,
when the coefficient matrices and the right-hand side have low-rank
off-diagonal blocks. This comprises problems with banded data, recently studied
by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for
large-scale interconnected systems", Automatica, 2016, and by Palitta and
Simoncini in "Numerical methods for large-scale Lyapunov equations with
symmetric banded data", SISC, 2018, which often arise in the discretization of
elliptic PDEs.
We show that, under suitable assumptions, the quasiseparable structure is
guaranteed to be numerically present in the solution, and explicit novel
estimates of the numerical rank of the off-diagonal blocks are provided.
Efficient solution schemes that rely on the technology of hierarchical
matrices are described, and several numerical experiments confirm the
applicability and efficiency of the approaches. We develop a MATLAB toolbox
that allows easy replication of the experiments and a ready-to-use interface
for the solvers. The performance of the different approaches is compared, and we show that the new methods described are efficient on several classes of relevant problems.
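The rank phenomenon the paper quantifies can be observed directly: solve a Lyapunov equation with banded data and inspect the singular values of an off-diagonal block of the (dense) solution. The sizes and the 1e-8 truncation threshold below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
n = 200
# Banded SPD coefficient (1D Laplacian) and a banded symmetric right-hand side
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = rng.uniform(1, 2, n)
e = rng.uniform(0, 0.5, n - 1)
C = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Solve the Lyapunov equation A X + X A^T = C (SciPy solves A X + X B = Q)
X = solve_sylvester(A, A.T, C)
res = np.linalg.norm(A @ X + X @ A.T - C) / np.linalg.norm(C)

# The solution is dense, but the singular values of its off-diagonal blocks
# decay rapidly: X is numerically quasiseparable
blk = X[: n // 2, n // 2:]
s = np.linalg.svd(blk, compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
```

Here a 100-by-100 off-diagonal block compresses to a few dozen numerical ranks at most, which is the structural fact that makes hierarchical-matrix solvers applicable.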
Low-rank updates and a divide-and-conquer method for linear matrix equations
Linear matrix equations, such as the Sylvester and Lyapunov equations, play
an important role in various applications, including the stability analysis and
dimensionality reduction of linear dynamical control systems and the solution
of partial differential equations. In this work, we present and analyze a new
algorithm, based on tensorized Krylov subspaces, for quickly updating the
solution of such a matrix equation when its coefficients undergo low-rank
changes. We demonstrate how our algorithm can be utilized to accelerate the
Newton method for solving continuous-time algebraic Riccati equations. Our
algorithm also forms the basis of a new divide-and-conquer approach for linear
matrix equations with coefficients that feature hierarchical low-rank
structure, such as HODLR, HSS, and banded matrices. Numerical experiments
demonstrate the advantages of divide-and-conquer over existing approaches, in
terms of computational time and memory consumption.
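The mechanism behind such updates is elementary to state: if $X_0$ solves the original equation and a coefficient is perturbed by a low-rank term, the correction solves a matrix equation whose right-hand side has that same low rank. A minimal dense sketch, using SciPy's Bartels-Stewart solver in place of the paper's tensorized Krylov machinery:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n = 80
# Positive-definite-like coefficients so A X + X B = C is well posed
A = np.diag(np.linspace(1, 5, n)) + 0.01 * rng.standard_normal((n, n))
B = np.diag(np.linspace(1, 5, n)) + 0.01 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

X0 = solve_sylvester(A, B, C)                 # A X0 + X0 B = C

# Rank-1 change of a coefficient: A -> A + u v^T (scaled to a modest perturbation)
u = rng.standard_normal((n, 1)) / np.sqrt(n)
v = rng.standard_normal((n, 1)) / np.sqrt(n)
At = A + u @ v.T

# The correction dX = X - X0 solves (A + u v^T) dX + dX B = -u (v^T X0),
# whose right-hand side has rank 1; this is what makes cheap updates possible
dX = solve_sylvester(At, B, -u @ (v.T @ X0))
X = X0 + dX
res = np.linalg.norm(At @ X + X @ B - C) / np.linalg.norm(C)
```

In the paper the correction equation is not solved densely, of course: its low-rank right-hand side is exactly what a Krylov method exploits, and recursing on this idea yields the divide-and-conquer scheme for HODLR/HSS/banded coefficients.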
A framework for structured linearizations of matrix polynomials in various bases
We present a framework for the construction of linearizations for scalar and matrix polynomials based on dual bases which, in the case of orthogonal polynomials, can be described by the associated recurrence relations. The framework provides an extension of the classical linearization theory for polynomials expressed in non-monomial bases and makes it possible to represent polynomials expressed in product families, that is, as linear combinations of elements of the form $\phi_i(\lambda)\psi_j(\lambda)$, where $\{\phi_i\}$ and $\{\psi_j\}$ can either be polynomial bases or polynomial families satisfying some mild assumptions. We show that this general construction can be used for many different purposes. Among them, we show how to linearize sums of polynomials and rational functions expressed in different bases. As an example, this makes it possible to look for intersections of functions interpolated on different nodes without converting them to the same basis. We then provide some constructions of structured linearizations for $\star$-even and $\star$-palindromic matrix polynomials. The extension of these constructions to $\star$-odd and $\star$-antipalindromic matrix polynomials of odd degree is discussed and follows immediately from the previous results.
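For a scalar polynomial in the Chebyshev basis, linearization from the recurrence relation specializes to the classical colleague matrix; a small self-contained sketch:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

def colleague_matrix(a):
    """Linearization of p(x) = sum_k a[k] T_k(x), built directly from the
    Chebyshev recurrence x T_0 = T_1, x T_k = (T_{k-1} + T_{k+1}) / 2.
    The eigenvalues of the returned d-by-d matrix are the roots of p."""
    a = np.asarray(a, dtype=float)
    d = len(a) - 1                       # degree
    C = np.zeros((d, d))
    C[0, 1] = 1.0                        # x T_0 = T_1
    for k in range(1, d):
        C[k, k - 1] = 0.5
        if k + 1 < d:
            C[k, k + 1] = 0.5
    # replace T_d by -(a_0 T_0 + ... + a_{d-1} T_{d-1}) / a_d  (mod p)
    C[d - 1, :] -= 0.5 * a[:d] / a[d]
    return C

# p(x) = (x + 0.5)(x - 0.1)(x - 0.7), expressed in the Chebyshev basis
mono = [0.035, -0.33, -0.3, 1.0]         # monomial coefficients, low to high
a = Ch.poly2cheb(mono)
roots = np.sort(np.linalg.eigvals(colleague_matrix(a)).real)
```

The key point, which the framework generalizes well beyond Chebyshev, is that no conversion to the monomial basis is ever performed: the recurrence itself supplies the linearization.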
Efficient cyclic reduction for QBDs with rank structured blocks
We provide effective algorithms for solving block tridiagonal block Toeplitz systems with quasiseparable blocks, as well as quadratic matrix equations with quasiseparable coefficients, based on cyclic reduction and on the technology of rank-structured matrices. The algorithms rely on the exponential decay of the singular values of the off-diagonal submatrices generated by cyclic reduction, and we provide a formal proof of this decay in the Markovian framework. The results of the numerical experiments that we report confirm a significant speedup over the general algorithms, already starting from moderately small sizes.
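A dense-block sketch of cyclic reduction for the QBD equation $G = A_{-1} + A_0 G + A_1 G^2$ follows. The recursion is the standard Bini-Meini one, written here with plain dense arithmetic where the paper substitutes quasiseparable block arithmetic; sizes, seed, and stopping criterion are illustrative.

```python
import numpy as np

def qbd_cyclic_reduction(Am1, A0, A1, tol=1e-14, maxit=50):
    """Cyclic reduction for the minimal nonnegative solution G of
        G = A_{-1} + A_0 G + A_1 G^2
    for a discrete-time QBD with row-stochastic [A_{-1} A_0 A_1]."""
    n = A0.shape[0]
    I = np.eye(n)
    Bm1, B0, B1, Bh = Am1.copy(), A0.copy(), A1.copy(), A0.copy()
    for _ in range(maxit):
        S = np.linalg.inv(I - B0)
        up_down = B1 @ S @ Bm1          # go up one coarse level, dwell, come down
        Bh = Bh + up_down
        B0 = B0 + up_down + Bm1 @ S @ B1
        B1 = B1 @ S @ B1
        Bm1 = Bm1 @ S @ Bm1
        if np.linalg.norm(Bm1, 1) * np.linalg.norm(B1, 1) < tol:
            break
    # from level 1: return to level 1 (Bh) any number of times, then step down
    return np.linalg.solve(I - Bh, Am1)

# Random QBD biased toward downward transitions (positive recurrent case)
rng = np.random.default_rng(4)
n = 8
W = rng.uniform(size=(n, 3 * n))
W[:, :n] *= 3                           # make A_{-1} dominant
W /= W.sum(axis=1, keepdims=True)       # rows of [A_{-1} A_0 A_1] sum to 1
Am1, A0, A1 = W[:, :n], W[:, n:2 * n], W[:, 2 * n:]

G = qbd_cyclic_reduction(Am1, A0, A1)
res = np.linalg.norm(Am1 + A0 @ G + A1 @ G @ G - G, 1)
```

Each step squares the "up" and "down" blocks, so convergence is quadratic away from the null recurrent case; the paper's contribution is that all the blocks generated along the way stay numerically quasiseparable, so each iteration can be performed in rank-structured arithmetic.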
- …